
    Are V1 simple cells optimized for visual occlusions? A comparative study

    Get PDF
    Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models of simple cell coding because they link receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions affect the predicted shapes of simple cell receptive fields. We take a comparative approach to answer this question and investigate two models of simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their assumption about how components superimpose. We find that the image encodings and receptive fields predicted by the models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained with specific choices of sparsity and overcompleteness in linear sparse coding, the vast majority of studies on linear models (including all ICA models) report no or only low proportions. Likewise, for the linear model investigated here at optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and matches the experimentally observed high proportions of 'globular' fields well. Our computational study therefore suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.

    Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized for such stimuli through evolution and throughout our lifespan. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large numbers of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, they are only obtained for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. Comparing with recent in vivo experiments, we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
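
    A minimal sketch of the modeling difference at the heart of this comparison, under illustrative assumptions (the paper's actual models infer receptive fields from natural image patches; here only the superposition rule is contrasted): linear models add overlapping components, while a simple occlusive rule lets the strongest component win at each pixel.

    ```python
    import numpy as np

    # Two toy "components" on an 8x8 patch: a vertical and a horizontal bar.
    vertical = np.zeros((8, 8))
    vertical[:, 3] = 1.0
    horizontal = np.zeros((8, 8))
    horizontal[4, :] = 0.8

    # Linear superposition (standard sparse coding / ICA assumption):
    # overlapping components add, so the crossing pixel is brighter than either bar.
    linear_image = vertical + horizontal

    # Occlusive superposition (one common approximation): at each pixel only the
    # occluding, i.e. strongest, component is visible.
    occlusive_image = np.maximum(vertical, horizontal)

    print("crossing pixel, linear:   ", linear_image[4, 3])     # 1.8
    print("crossing pixel, occlusive:", occlusive_image[4, 3])  # 1.0
    ```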

    Autonomous Cleaning of Corrupted Scanned Documents - A Generative Modeling Approach

    Full text link
    We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink, etc. We aim at autonomously removing dirt from a single letter-size page based only on the information the page contains. Our approach therefore has to learn character representations without supervision and requires a mechanism to distinguish learned representations from irregular patterns. To learn character representations, we use a probabilistic generative model parameterizing pattern features, feature variances, the features' planar arrangements, and pattern frequencies. The latent variables of the model describe the pattern class, the pattern position, and the presence or absence of individual pattern features. The model parameters are optimized using a novel variational EM approximation. After learning, the parameters represent planar feature arrangements and their variances, independently of their absolute positions. A quality measure defined on the learned representation then allows for autonomous discrimination between regular character patterns and the irregular patterns making up the dirt. The irregular patterns can thus be removed to clean the document. For a full Latin alphabet we found that a single page does not contain sufficiently many character examples. However, we show that a page containing a smaller number of character types can be cleaned efficiently and autonomously, even if heavily corrupted by dirt, solely based on the structural regularity of the characters it contains. In examples using characters from different alphabets, we demonstrate the generality of the approach and discuss its implications for future developments. Comment: oral presentation and Google Student Travel Award; IEEE Conference on Computer Vision and Pattern Recognition 201
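
    A hedged sketch of the cleaning idea, with the paper's feature-arrangement model replaced by a plain Gaussian mixture over patches (an illustrative stand-in, not the method of the paper): patches that score low under the model learned for regular characters are treated as dirt and removed. All names and thresholds below are assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def clean_page(patches, n_character_types=10, keep_quantile=0.2):
        """patches: (N, D) array of flattened page patches, values in [0, 1]."""
        # Unsupervised model of regular character patterns (illustrative stand-in).
        gmm = GaussianMixture(n_components=n_character_types,
                              covariance_type="diag", random_state=0).fit(patches)
        scores = gmm.score_samples(patches)       # log-likelihood = "regularity" measure
        threshold = np.quantile(scores, keep_quantile)
        is_regular = scores >= threshold          # regular characters score high
        cleaned = patches.copy()
        cleaned[~is_regular] = 1.0                # paint irregular (dirt) patches white
        return cleaned, is_regular

    patches = np.clip(np.random.default_rng(0).normal(0.9, 0.1, (500, 64)), 0, 1)
    cleaned, mask = clean_page(patches)
    print(f"kept {mask.sum()} of {len(mask)} patches as regular")
    ```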

    Truncated Variational EM for Semi-Supervised Neural Simpletrons

    Full text link
    Inference and learning in probabilistic generative networks is often very challenging and typically prevents scaling to networks as large as those used in deep discriminative approaches. To obtain efficiently trainable, large-scale and well-performing generative networks for semi-supervised learning, we here combine two recent developments: a neural network reformulation of hierarchical Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM approach (TV-EM). TV-EM provides theoretical guarantees for learning in generative networks, and its application to Neural Simpletrons results in particularly compact, yet approximately optimal, modifications of the learning equations. On standard benchmarks, we empirically find that learning converges in fewer EM iterations, that the complexity per EM iteration is reduced, and that final likelihood values are higher on average. For classification on data sets with few labels, these learning improvements result in consistently lower error rates compared to applications without truncation. Experiments on the MNIST data set allow for comparison to standard and state-of-the-art models in the semi-supervised setting. Further experiments on the NIST SD19 data set show the scalability of the approach when a wealth of additional unlabeled data is available.
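
    The key ingredient, truncation, can be sketched generically (illustrative code, not the Neural Simpletron learning equations): per data point, the variational distribution is restricted to the K currently best latent states, so E-step responsibilities are non-zero only on a small set and all sums run over K states instead of all C.

    ```python
    import numpy as np

    def truncated_e_step(log_joint, K=3):
        """log_joint: (N, C) array of log p(x_n, c). Returns sparse responsibilities."""
        N, C = log_joint.shape
        # Truncated variational set: indices of the K best latent states per point.
        top = np.argpartition(-log_joint, K - 1, axis=1)[:, :K]
        resp = np.zeros_like(log_joint)
        rows = np.arange(N)[:, None]
        lj = log_joint[rows, top]
        lj -= lj.max(axis=1, keepdims=True)               # numerical stabilization
        w = np.exp(lj)
        resp[rows, top] = w / w.sum(axis=1, keepdims=True)  # zero outside the set
        return resp

    log_joint = np.random.default_rng(0).normal(size=(5, 20))
    print(truncated_e_step(log_joint).round(2))  # at most K non-zeros per row
    ```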

    Large Scale Clustering with Variational EM for Gaussian Mixture Models

    Full text link
    How can we efficiently find large numbers of clusters in large data sets with high-dimensional data points? Our aim is to explore the current efficiency and large-scale limits of fitting a parametric clustering model to data distributions. To do so, we combine recent lines of research that have previously focused on separate specific methods for complexity reduction. We first show theoretically how the clustering objective of variational EM (which reduces complexity for many clusters) can be combined with coreset objectives (which reduce complexity for many data points). Secondly, we realize a concrete, highly efficient iterative procedure which combines and translates the theoretical complexity gains of truncated variational EM and coresets into a practical algorithm. At very large scales, the high efficiency of the parameter updates then requires (A) highly efficient coreset construction and (B) highly efficient initialization procedures (seeding) in order to avoid computational bottlenecks. Fortunately, very efficient coreset construction has become available in the form of lightweight coresets, and very efficient initialization has become available in the form of AFK-MC² seeding. The resulting algorithm features balanced computational costs across all of its components. In applications to standard large-scale clustering benchmarks, we investigate the algorithm's efficiency/quality trade-off. Compared to the best recent approaches, we observe speedups of up to one order of magnitude, and of up to two orders of magnitude compared to the k-means++ baseline. To demonstrate that the achieved efficiency enables applications previously considered unfeasible, we cluster the entire and unscaled 80 million Tiny Images dataset into up to 32,000 clusters. To the knowledge of the authors, this represents the largest-scale fit of a parametric data model for clustering reported so far.
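
    One of the efficiency ingredients can be sketched concretely. Below is a minimal lightweight-coreset construction in the spirit the abstract describes (the sampling distribution mixes a uniform term with a distance-to-mean term); the truncated variational E-step and AFK-MC² seeding are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def lightweight_coreset(X, m, rng):
        """Sample a weighted coreset of m points from X (n, d)."""
        n = len(X)
        d2 = ((X - X.mean(axis=0)) ** 2).sum(axis=1)
        q = 0.5 / n + 0.5 * d2 / d2.sum()      # mix of uniform and distance-based terms
        idx = rng.choice(n, size=m, replace=True, p=q)
        weights = 1.0 / (m * q[idx])           # importance weights for the objective
        return X[idx], weights

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 16))
    C, w = lightweight_coreset(X, m=2_000, rng=rng)
    print(C.shape, round(w.sum()))  # the weighted coreset stands in for the full set
    ```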

    Changes in sensory profile and microbiological quality during chill storage of cured and uncured cooked sliced emulsion-type sausages

    Get PDF
    Nine batches of cooked sliced emulsion-type sausages, produced from organic meat with or without the use of nitrite, were packed under N2/CO2 (7:3) and vacuum, respectively, and stored at 8 °C. During two weeks of storage, the microbiological quality was determined, as well as the sensory quality using profile analysis. The spoilage flora consisted of lactic acid bacteria and Brochothrix thermosphacta, with no clear evidence of an inhibitory effect of nitrite on the growth of these organisms. The sensory profile showed an increase in sour and rancid aroma and flavour in parallel with the growth of these microorganisms. These changes were more pronounced in the batches prepared without nitrite. Spots of red discolouration were occasionally observed during storage of uncured sausage slices. These may have been caused by the reduction of traces of nitrate from spices by psychrotrophic Enterobacteriaceae.

    Machine Learning: Binary Non-negative Matrix Factorization

    Get PDF
    This bachelor thesis theoretically derives and implements an unsupervised probabilistic generative model called Binary Non-Negative Matrix Factorization. It is a simplification of standard Non-Negative Matrix Factorization in which the factorization into two matrices is restricted so that one of them has only binary instead of continuous components. This simplifies the computation, making the model exactly solvable while retaining most of its learning capabilities, and connects the algorithm to a modified version of Binary Sparse Coding. The learning phase of the model uses the EM algorithm, an iterative method that maximizes the likelihood function with respect to the parameters to be learned in a two-step process. The model is tested on artificial data, where it is shown to learn the hidden parameters, although it fails to work properly when applied to real data.
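
    A minimal sketch of why restricting one factor to binary values makes EM exactly solvable (illustrative assumptions: Gaussian noise, a Bernoulli prior, and a crude non-negativity projection in the M-step; the thesis' exact formulation may differ): with H binary latents, the E-step can enumerate all 2^H states exactly.

    ```python
    import numpy as np
    from itertools import product

    def em_binary_nmf(X, H=3, iters=20, sigma=0.1, pi=0.5):
        """Fit X (N, D) ~ W s with s in {0,1}^H via exact EM over all 2^H states."""
        N, D = X.shape
        W = np.abs(np.random.default_rng(0).normal(size=(D, H)))
        S = np.array(list(product([0, 1], repeat=H)), dtype=float)  # all binary states
        for _ in range(iters):
            # E-step: exact posterior over the 2^H binary states per data point.
            recon = S @ W.T                                          # (2^H, D)
            log_p = -((X[:, None, :] - recon[None]) ** 2).sum(-1) / (2 * sigma ** 2)
            log_p += (S * np.log(pi) + (1 - S) * np.log(1 - pi)).sum(1)
            log_p -= log_p.max(axis=1, keepdims=True)                # stabilization
            post = np.exp(log_p)
            post /= post.sum(axis=1, keepdims=True)
            # M-step: closed-form W from expected sufficient statistics,
            # crudely projected onto non-negative values for illustration.
            Es = post @ S                                            # (N, H), E[s_n]
            Ess = np.einsum('nk,ki,kj->ij', post, S, S)              # sum_n E[s s^T]
            W = np.maximum(X.T @ Es @ np.linalg.pinv(Ess), 0.0)
        return W

    X = np.maximum(np.random.default_rng(1).normal(1.0, 0.3, size=(200, 8)), 0.0)
    print(em_binary_nmf(X).shape)  # (8, 3): learned non-negative dictionary
    ```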

    Truncated Variational Sampling for "Black Box" Optimization of Generative Models

    Get PDF
    We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach. The approach distinguishes itself from previous variational approaches by using latent states as variational parameters. Here we use efficient, general-purpose sampling procedures to vary the latent states, and we investigate the "black box" applicability of the resulting optimization procedure. For general-purpose applicability, samples are drawn from approximate marginal distributions of the considered generative model as well as from the model's prior distribution. As such, variational sampling is defined in a generic form and is directly executable for a given model. As a proof of concept, we apply the novel procedure (A) to Binary Sparse Coding (a model with continuous observables), and (B) to basic Sigmoid Belief Networks (models with binary observables). Numerical experiments verify that the investigated approach efficiently and effectively increases a variational free energy objective without requiring any additional analytical steps.
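
    A generic sketch of varying latent states by sampling, under stated simplifications (proposals below come from the prior only, whereas the abstract also samples from approximate marginals; the model, names and scores are illustrative): proposed states with a higher joint probability p(x, s) replace worse states in the truncated variational set.

    ```python
    import numpy as np

    def update_truncated_set(x, states, log_joint_fn, rng, n_proposals=8, pi=0.2):
        """states: (K, H) current binary variational states for one data point."""
        H = states.shape[1]
        proposals = (rng.random((n_proposals, H)) < pi).astype(float)  # prior samples
        pool = np.unique(np.vstack([states, proposals]), axis=0)       # dedupe
        scores = np.array([log_joint_fn(x, s) for s in pool])
        keep = np.argsort(-scores)[: len(states)]                      # best K survive
        return pool[keep]

    # Toy joint: Bernoulli prior plus Gaussian likelihood through fixed weights W.
    W = np.random.default_rng(1).normal(size=(5, 4))

    def log_joint(x, s, sigma=0.5, pi=0.2):
        prior = (s * np.log(pi) + (1 - s) * np.log(1 - pi)).sum()
        return prior - ((x - W @ s) ** 2).sum() / (2 * sigma ** 2)

    rng = np.random.default_rng(2)
    x = W @ np.array([1.0, 0.0, 0.0, 1.0])
    states = (rng.random((4, 4)) < 0.5).astype(float)
    for _ in range(50):
        states = update_truncated_set(x, states, log_joint, rng)
    print(states)  # surviving states should include s = (1, 0, 0, 1)
    ```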